EGMn: The Sequential Endogenous Grid Method

IAAE 2024 – Thessaloniki, Greece

Alan Lujan

Johns Hopkins University
Econ-ARK

June 26, 2024

Motivation

  • Structural Economics for modeling decision-making under uncertainty
    • households: consumption, savings, labor, portfolio, retirement
    • firms: production, investment, hiring, entry/exit
    • governments: fiscal and monetary policy, taxation, redistribution
    • interdisciplinary: climate change, public health, education, etc.
  • Structural modeling is hard
    • modern economics requires solving complex problems
    • with many state variables, many decisions, and non-convexities
    • computationally challenging and time-consuming

Outline

  • Dynamic Programming
    • The Endogenous Grid Method
    • The Sequential Endogenous Grid Method
  • Functional Approximation
    • Conventional techniques are insufficient for complex problems
    • Neural Nets as function approximators
    • Gaussian Process Regression
  • Conclusion
    • Sequential problems are easier to solve
    • GPR is a powerful tool for interpolation on unstructured grids

Dynamic Programming

A simple consumption-savings problem

Agent maximizes present discounted value (PDV) of lifetime utility

\[\begin{equation} \max_{c_t} \sum_{t=0}^{\infty} \beta^t \mathrm{u}(c_t) \end{equation}\]

Recursive Bellman equation

\[\begin{equation} \begin{split} v_t(m_t) & = \max_{c_t} \mathrm{u}(c_t) + \beta\mathbb{E}_t \left[ v_{t+1}(m_{t+1}) \right] \\ \text{s.t.} & \quad 0 < c_t \leq m_t \\ a_t & = m_t - c_t \\ m_{t+1} & = \mathsf{R}a_t + \theta_{t+1} \end{split} \end{equation}\]

How do we solve this problem?

  • Value Function Iteration (VFI)
    • Discretize state space (interpolation)
    • Grid search optimization (brute force, iterative)
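To make the VFI approach concrete, here is a minimal sketch for the consumption-savings problem above: discretize \(m\), brute-force search over consumption, and iterate the Bellman operator. All parameter values (\(\rho\), \(\beta\), \(\mathsf{R}\), the income states) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimal VFI sketch: discretize m, grid-search over consumption shares,
# iterate the Bellman operator to convergence. Parameters are illustrative.
rho, beta, R = 2.0, 0.96, 1.03
theta = np.array([0.7, 1.3])            # two equiprobable income draws
m_grid = np.linspace(0.1, 10.0, 50)     # discretized market resources
c_shares = np.linspace(0.05, 1.0, 40)   # consume a fraction of resources

def u(c):
    return c ** (1 - rho) / (1 - rho)

v = np.zeros_like(m_grid)               # initial guess for the value function
for _ in range(1000):                   # iterate the Bellman operator
    c = m_grid[:, None] * c_shares[None, :]          # candidate consumption
    a = m_grid[:, None] - c                          # end-of-period assets
    m_next = R * a[..., None] + theta                # shape (m, share, theta)
    Ev = np.interp(m_next.ravel(), m_grid, v).reshape(m_next.shape).mean(axis=-1)
    v_new = np.max(u(c) + beta * Ev, axis=1)         # grid-search maximization
    if np.max(np.abs(v_new - v)) < 1e-6:
        v = v_new
        break
    v = v_new

# recover the consumption policy from the converged value function
c = m_grid[:, None] * c_shares[None, :]
a = m_grid[:, None] - c
Ev = np.interp((R * a[..., None] + theta).ravel(), m_grid, v).reshape(a.shape + theta.shape).mean(axis=-1)
c_pol = c[np.arange(len(m_grid)), np.argmax(u(c) + beta * Ev, axis=1)]
```

Note the cost: every Bellman iteration evaluates the objective at every (state, choice) pair, which is exactly the brute-force burden EGM avoids.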

The Endogenous Grid Method
by Carroll (2006)

\[\begin{equation} v_t(m_t) = \max_{c_t} \mathrm{u}(c_t) + \beta\mathbb{E}_t \left[ v_{t+1}(\mathsf{R}(m_t - c_t) + \theta_{t+1}) \right] \end{equation}\]

\[\begin{equation} u'(c_t) = \beta\mathsf{R}\mathbb{E}_t \left[ v_{t+1}'(\mathsf{R}(m_t - c_t) + \theta_{t+1}) \right] \end{equation}\]

\[\begin{equation} \mathfrak{c}_t([\mathrm{a}]) = \mathrm{u}'^{-1} \left( \beta\mathsf{R}\mathbb{E}_t \left[ v_{t+1}'(\mathsf{R}[\mathrm{a}]+ \theta_{t+1}) \right] \right) \end{equation}\]

\[\begin{equation} \mathfrak{m}_t([\mathrm{a}]) = \mathfrak{c}_t([\mathrm{a}]) + [\mathrm{a}] \end{equation}\]

Contribution: \((\mathfrak{m}_t, \mathfrak{c}_t) \quad \Rightarrow \quad \hat{c}_t(\mathfrak{m}_t) = \mathfrak{c}_t\)

  • Simple: Inverted Euler equation
  • Fast: No root-finding or grid search optimization required
  • Efficient: Finds exact solution at each gridpoint
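The inverted Euler equation above can be sketched in a few lines. With CRRA utility, \(u'(c) = c^{-\rho}\) and \(u'^{-1}(x) = x^{-1/\rho}\); the parameter values below are illustrative assumptions.

```python
import numpy as np

# Minimal EGM sketch for the same consumption-savings problem.
# u'(c) = c^(-rho), so u'^{-1}(x) = x^(-1/rho). Parameters are illustrative.
rho, beta, R = 2.0, 0.96, 1.03
theta = np.array([0.7, 1.3])              # equiprobable income draws
a_grid = np.linspace(0.01, 10.0, 50)      # exogenous end-of-period asset grid

# start from the terminal-period policy c(m) = m (consume everything)
m_endo = np.linspace(0.01, 20.0, 50)
c_endo = m_endo.copy()

for _ in range(200):                      # backward time iteration
    m_next = R * a_grid[:, None] + theta
    c_next = np.interp(m_next.ravel(), m_endo, c_endo).reshape(m_next.shape)
    # inverted Euler equation: c_t = u'^{-1}( beta R E_t[u'(c_{t+1})] )
    c_new = (beta * R * (c_next ** -rho).mean(axis=1)) ** (-1 / rho)
    m_endo, c_endo = c_new + a_grid, c_new   # endogenous grid: m = c + a

# (m_endo, c_endo) trace out the consumption function; the borrowing-
# constrained segment (c = m below m_endo[0]) is omitted for brevity.
```

Each iteration is a closed-form evaluation on the asset grid: no root-finding, no grid search.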

Limitations of EGM

EGMn: The Sequential Endogenous Grid Method

  • Insight: Problems in which an agent makes several simultaneous choices can be decomposed into a sequence of simpler problems
  • Challenge: A rectilinear exogenous grid results in an unstructured endogenous grid
  • Solution: Use machine learning to interpolate on unstructured grids

Contribution:

  • Simple, Fast, Efficient: Inherits properties of EGM
  • Multi-dimensional: Can be used for problems with multiple state variables and decisions
  • Cutting-edge: Functional approximation and uncertainty quantification approach using Gaussian Process Regression

A consumption-leisure problem

Agent maximizes PDV of lifetime utility

\[\begin{equation} \mathrm{V}_0(B_0, \theta_0) = \max_{C_{t}, Z_{t}} \mathbb{E}_{t} \left[ \sum_{t = 0}^{T} \beta^{t} \mathrm{u}(C_{t}, Z_{t}) \right] \end{equation}\]

Recursive Bellman equation in normalized form:

\[\begin{equation} \begin{split} \mathrm{v}_{t}(b_{t}, \theta_{t}) & = \max_{\{c_{t}, z_{t}\}} \mathrm{u}(c_{t}, z_{t}) + \beta\mathbb{E}_{t} \left[ \Gamma_{t+1}^{1-\rho} \mathrm{v}_{t+1} (b_{t+1}, \theta_{t+1}) \right] \\ \ell_{t} & = 1 - z_{t} \\ m_{t} & = b_{t} + \theta_{t} \mathsf{w}\ell_{t} \\ a_{t} & = m_{t} - c_{t} \\ b_{t+1} & = a_{t} \mathsf{R}/ \Gamma_{t+1} \end{split} \end{equation}\]

where

\[\begin{equation} \mathrm{u}(C, Z) = u(C) + h(Z) = \frac{C^{1-\rho}}{1-\rho} + \nu^{1-\rho} \frac{Z^{1-\zeta}}{1-\zeta} \end{equation}\]

Breaking up the problem into sequences

Starting from the beginning of the period, we can define the labor-leisure problem as

\[\begin{equation} \begin{split} \mathrm{v}_{t}(b_{t}, \theta_{t}) & = \max_{ z_{t}} h(z_{t}) + \tilde{\mathfrak{v}}_{t} (m_{t}) \\ & \text{s.t.} \\ 0 & \leq z_{t} \leq 1 \\ \ell_{t} & = 1 - z_{t} \\ m_{t} & = b_{t} + \theta_{t} \mathsf{w}\ell_{t}. \end{split} \end{equation}\]

The pure consumption-saving problem is then

\[\begin{equation} \begin{split} \tilde{\mathfrak{v}}_{t}(m_{t}) & = \max_{c_{t}} u(c_{t}) + \beta\mathfrak{v}_{t}(a_{t}) \\ & \text{s.t.} \\ 0 & \leq c_{t} \leq m_{t} \\ a_{t} & = m_{t} - c_{t}. \end{split} \end{equation}\]

Finally, the post-decision value function is

\[\begin{equation} \begin{split} \mathfrak{v}_{t}(a_{t}) & = \mathbb{E}_{t} \left[ \Gamma_{t+1}^{1-\rho} \mathrm{v}_{t+1}(b_{t+1}, \theta_{t+1}) \right] \\ & \text{s.t.} \\ b_{t+1} & = a_{t} \mathsf{R}/ \Gamma_{t+1}. \end{split} \end{equation}\]

Solving Labor-Leisure (EGM, Again)

We can condense the labor-leisure problem into a single equation:

\[\begin{equation} \mathrm{v}_{t}(b_{t}, \theta_{t}) = \max_{ z_{t}} h(z_{t}) + \tilde{\mathfrak{v}}_{t}(b_{t} + \theta_{t} \mathsf{w}(1-z_{t})) \end{equation}\]

Interior solution must satisfy the first-order condition:

\[\begin{equation} h'(z_{t}) = \tilde{\mathfrak{v}}_{t}'(m_{t}) \theta_{t} \mathsf{w} \end{equation}\]

EGM consists of inverting the first-order condition to find leisure function:

\[\begin{equation} \mathfrak{z}_{t}([\mathrm{m}], [\mathrm{\theta}]) = h'^{-1}\left( \tilde{\mathfrak{v}}_{t}'([\mathrm{m}]) \mathsf{w}[\mathrm{\theta}]\right) \end{equation}\]

Using the market resources condition, we obtain the endogenous grid:

\[\mathfrak{b}_{t}([\mathrm{m}], [\mathrm{\theta}]) = [\mathrm{m}]- [\mathrm{\theta}]\mathsf{w}(1-\mathfrak{z}_{t}([\mathrm{m}], [\mathrm{\theta}]))\]

We now have the triple \((\mathfrak{z}_t, \mathfrak{b}_t, \theta)\), where \(\mathfrak{z}_t\) is the unconstrained approximation of optimal leisure at each endogenous point \((\mathfrak{b}_t, \theta)\) corresponding to each exogenous \(([\mathrm{m}], [\mathrm{\theta}])\). In general, we can construct an interpolator as follows:

\[z_t(\mathfrak{b}_t, \theta) = \mathfrak{z}_t\]

The actual leisure function is bounded between 0 and 1:

\[\begin{equation} \hat{z}_{t}(b, \theta) = \max \left[ \min \left[ z_{t}(b, \theta), 1 \right], 0 \right] \end{equation}\]
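One EGM step for this labor-leisure stage can be sketched as follows. With \(h(z) = \nu^{1-\rho} z^{1-\zeta}/(1-\zeta)\), we have \(h'(z) = \nu^{1-\rho} z^{-\zeta}\), so \(h'^{-1}(x) = (x/\nu^{1-\rho})^{-1/\zeta}\). The parameter values and the power-function stand-in for the consumption stage's marginal value \(\tilde{\mathfrak{v}}_t'\) are illustrative assumptions only.

```python
import numpy as np

# One EGM step for the labor-leisure stage. h'(z) = nu^(1-rho) z^(-zeta),
# so h'^{-1}(x) = (x / nu^(1-rho))^(-1/zeta). Parameters and the stand-in
# marginal value function are illustrative assumptions.
rho, zeta, nu, w = 2.0, 2.0, 0.5, 1.0
m_grid = np.linspace(0.5, 10.0, 40)        # exogenous market-resources grid
theta_grid = np.array([0.8, 1.0, 1.2])     # productivity states

def vtilde_prime(m):                       # stand-in for v~_t'(m)
    return m ** (-rho)

m, th = np.meshgrid(m_grid, theta_grid, indexing="ij")
z = (vtilde_prime(m) * w * th / nu ** (1 - rho)) ** (-1 / zeta)  # inverted FOC
b = m - th * w * (1 - z)           # endogenous grid of beginning-of-period balances
z_hat = np.clip(z, 0.0, 1.0)       # bound leisure to [0, 1]
```

The pairs \((b, \theta) \mapsto z\) are exactly the unstructured endogenous gridpoints the next sections worry about.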

Pretty Simple, Right?

What is the problem?

Exogenous Rectangular Grid
Endogenous Curvilinear Grid
  • One solution: Curvilinear Interpolation by White (2015)

Warped Grid Interpolation

  • Our solution: Warped Grid Interpolation (simpler, faster; more details in the paper)

A more complex problem

Consumption-Pension Deposit Problem as in Druedahl and Jørgensen (2017)

\[\begin{equation} \begin{split} \mathrm{v}_{t}(m_{t}, n_{t}) & = \max_{c_{t}, d_{t}} u(c_{t}) + \beta\mathbb{E}_{t} \left[ \mathrm{v}_{t+1}(m_{t+1}, n_{t+1}) \right] \\ & \text{s.t.} \quad c_{t} > 0, \quad d_{t} \ge 0 \\ a_{t} & = m_{t} - c_{t} - d_{t} \\ b_{t} & = n_{t} + d_{t} + g(d_{t}) \\ m_{t+1} & = a_{t} \mathsf{R}+ \theta_{t+1} \\ n_{t+1} & = b_{t} \mathbf{R}_{t+1} \end{split} \end{equation}\]

where

\[\begin{equation} \mathrm{u}(c) = \frac{c^{1-\rho}}{1-\rho} \qquad \text{and} \qquad \mathrm{g}(d) = \chi\log(1+d). \end{equation}\]

Here \(\mathrm{g}(d)\) is a tax-advantaged premium on pension contributions.

G2EGM from Druedahl and Jørgensen (2017)

  • If we try to use EGM:
    • two first-order conditions
    • multiple constraints are difficult to handle
    • segments: combinations of first-order conditions and constraints
    • \(2^{d}\) segments, where \(d\) is the number of control variables
    • requires local triangulation interpolation

Breaking up the problem makes it easier

Consider the problem of a consumer who chooses how much to put into a pension account:

\[\begin{equation} \begin{split} \mathrm{v}_{t}(m_{t}, n_{t}) & = \max_{d_{t}} \tilde{\mathfrak{v}}_{t}(l_{t}, b_{t}) \\ & \text{s.t.} \quad 0 \le d_{t} \le m_t \\ l_{t} & = m_{t} - d_{t} \\ b_{t} & = n_{t} + d_{t} + g(d_{t}) \end{split} \end{equation}\]

After, the consumer chooses how much to consume out of liquid savings:

\[\begin{equation} \begin{split} \tilde{\mathfrak{v}}_{t}(l_{t}, b_{t}) & = \max_{c_{t}} u(c_{t}) + \beta\mathrm{w}_{t}(a_{t}, b_{t}) \\ & \text{s.t.} \quad 0 < c_{t} \le l_{t} \\ a_{t} & = l_{t} - c_{t} \end{split} \end{equation}\]

And the post-decision value function is defined as:

\[\begin{equation} \begin{split} \mathrm{w}_t(a_t, b_t) & = \mathbb{E}_{t} \left[ \mathrm{v}_{t+1}(m_{t+1}, n_{t+1}) \right] \\ & \text{s.t.} \\ m_{t+1} & = a_{t} \mathsf{R}+ \theta_{t+1} \\ n_{t+1} & = b_{t} \mathbf{R}_{t+1} \end{split} \end{equation}\]

Steps:

  1. Compute \(\mathrm{w}_t(a_t, b_t)\)
  2. Solve consumption problem (EGM)
  3. Solve pension problem (EGM, again)
  4. Done!

Solving the pension problem

The pension problem, more compactly

\[\begin{equation} \mathrm{v}_{t}(m_{t}, n_{t}) = \max_{d_{t}} \tilde{\mathfrak{v}}_{t}(m_{t} - d_{t}, n_{t} + d_{t} + \mathrm{g}(d_{t})) \end{equation}\]

Interior solution must satisfy the first-order condition:

\[\begin{equation} \mathrm{g}'(d_{t}) = \frac{\tilde{\mathfrak{v}}_{t}^{l}(l_{t}, b_{t})}{\tilde{\mathfrak{v}}_{t}^{b}(l_{t}, b_{t})} - 1 \end{equation}\]

Inverting, we can obtain the optimal choice of \(d_{t}\):

\[\begin{equation} \mathfrak{d}_{t}(l_{t}, b_{t}) = \mathrm{g}'^{-1}\left( \frac{\tilde{\mathfrak{v}}_{t}^{l}(l_{t}, b_{t})}{\tilde{\mathfrak{v}}_{t}^{b}(l_{t}, b_{t})} - 1 \right) \end{equation}\]

Using the resource constraints, we obtain the endogenous grids:

\[\begin{equation} \mathfrak{n}_{t}(l_{t}, b_{t}) = b_{t} - \mathfrak{d}_{t}(l_{t}, b_{t}) - \mathrm{g}(\mathfrak{d}_{t}(l_{t}, b_{t})) \\ \mathfrak{m}_{t}(l_{t}, b_{t}) = l_{t} + \mathfrak{d}_{t}(l_{t}, b_{t}) \end{equation}\]

We now have the triple \(\{\mathfrak{m}_t, \mathfrak{n}_t, \mathfrak{d}_t\}\), where \(\mathfrak{d}_t\) is the unconstrained approximation of the optimal deposit at each \((\mathfrak{m}_t, \mathfrak{n}_t)\) corresponding to each exogenous \((l_t, b_t)\). In general, we can construct an interpolator as follows:

\[\begin{equation} \hat{d}_t(\mathfrak{m}_t, \mathfrak{n}_t) = \begin{cases} 0 & \text{if } \mathfrak{d}_t < 0 \\ \mathfrak{d}_t & \text{if } 0 \le \mathfrak{d}_t \le \mathfrak{m}_t \\ \mathfrak{m}_t & \text{if } \mathfrak{d}_t > \mathfrak{m}_t \end{cases} \end{equation}\]
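The deposit-stage inversion above has a closed form here: with \(\mathrm{g}(d) = \chi\log(1+d)\), \(\mathrm{g}'(d) = \chi/(1+d)\), so \(\mathrm{g}'^{-1}(q) = \chi/q - 1\). The sketch below applies it on an exogenous \((l, b)\) grid; \(\chi\) and the power-function stand-ins for the partials \(\tilde{\mathfrak{v}}_t^{l}\), \(\tilde{\mathfrak{v}}_t^{b}\) are illustrative assumptions.

```python
import numpy as np

# One EGM step for the pension-deposit stage. g(d) = chi*log(1 + d), so
# g'(d) = chi/(1 + d) and g'^{-1}(q) = chi/q - 1. chi and the marginal-value
# stand-ins are illustrative assumptions, not the paper's calibration.
chi, rho = 1.0, 2.0
l_grid = np.linspace(0.5, 10.0, 30)        # exogenous liquid-assets grid
b_grid = np.linspace(0.5, 10.0, 30)        # exogenous pension-balance grid
l, b = np.meshgrid(l_grid, b_grid, indexing="ij")

v_l = l ** (-rho)                          # stand-in for dv~_t/dl
v_b = b ** (-rho)                          # stand-in for dv~_t/db
q = v_l / v_b - 1.0                        # FOC: g'(d) = v_l/v_b - 1

# invert the FOC where an interior deposit exists (0 < q <= chi gives d >= 0)
d = np.full_like(q, np.nan)
ok = (q > 0) & (q <= chi)
d[ok] = chi / q[ok] - 1.0

# endogenous grids (interior points only): n = b - d - g(d), m = l + d
n = b - d - chi * np.log1p(d)
m = l + d
```

The resulting \((m, n)\) points inherit no rectangular structure, which is precisely the interpolation problem the next slides address.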

Unstructured Grids

Problem: Rectilinear exogenous grid results in unstructured endogenous grid

Exogenous Rectangular Grid


Sparse Pension Exogenous Grid

Endogenous Unstructured Grid

Unstructured Pension Endogenous Grid

How do we interpolate on this grid?

Functional Approximation

Linear Interpolation on a Uniform Grid

Linear Interpolation on a Non-linear Grid

Bilinear Interpolation

Curvilinear (Warped) Grid Interpolation

See: White (2015)

What about Unstructured Grids?

See: Ludwig and Schön (2018)

Artificial Neural Networks

Figure 1: ANN (Source: scikit-learn.org)

  • Based on biological neural pathways (neurons in a brain)
  • Learns a function \(f(X): \mathbb{R}^n \rightarrow \mathbb{R}^m\)
  • Consists of
    • input (features) \(X\)
    • hidden layers \(g(\cdots)\)
    • output (target) \(y = f(X)\)
  • Hidden layers can have many nodes
  • Neural nets can have many hidden layers (deep learning)

A single neuron, and a bit of math

Figure 2: Perceptron

\[\begin{equation} y = g(w_0 + \sum_{i=1}^n w_i x_i) = g(w_0 + \mathbf{x}' \mathbf{w}) \end{equation}\]

  • \(y\) is the output or target
  • \(x_i\) are the inputs or features
  • \(w_0\) is the bias
  • \(w_i\) are the weights
  • \(g(\cdot)\) is the activation function (non-linear) \[\begin{equation} g(z) = \frac{1}{1 + e^{-z}} \end{equation}\]
  • usually a sigmoid, but there are many others
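The single-neuron equation above is short enough to write out directly; the input and weight values here are arbitrary illustrations.

```python
import numpy as np

# A single neuron: an affine combination of inputs passed through a
# sigmoid activation, y = g(w0 + x'w). Values are arbitrary illustrations.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neuron(x, w, w0):
    return sigmoid(w0 + x @ w)

x = np.array([0.5, -1.0, 2.0])   # features
w = np.array([0.1, 0.4, -0.2])   # weights
y = neuron(x, w, w0=0.3)         # output lies strictly between 0 and 1
```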

Training a Neural Network

Mean Squared Error (MSE)

\[\begin{equation} J(\mathbf{w}) = \frac{1}{2n} \sum_{i=1}^n \left[y_i - f(\mathbf{x}^{(i)}; \mathbf{w})\right]^2 \end{equation}\]

Objective

\[\begin{equation} \mathbf{w}^* = \arg \min_{\mathbf{w}} J(\mathbf{w}) \end{equation}\]

Gradient Descent

\[\begin{equation} \mathbf{w}^{(t+1)} = \mathbf{w}^{(t)} - \eta \nabla J(\mathbf{w}^{(t)}) \end{equation}\]

Stochastic Gradient Descent

\[\begin{equation} \widetilde{\nabla} J(\mathbf{w}^{(t)}) = \frac{1}{B} \sum_{i=1}^B \nabla J_i(\mathbf{w}^{(t)}) \end{equation}\]

\[\begin{equation} \mathbf{w}^{(t+1)} = \mathbf{w}^{(t)} - \eta \widetilde{\nabla} J(\mathbf{w}^{(t)}) \end{equation}\]
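The stochastic update above can be demonstrated on a toy linear model \(f(\mathbf{x}; \mathbf{w}) = \mathbf{x}'\mathbf{w}\) trained on the MSE objective; the data-generating process, learning rate, and batch size are illustrative assumptions.

```python
import numpy as np

# Minibatch SGD on the MSE objective for a toy linear model f(x; w) = x'w.
# Data, learning rate eta, and batch size B are illustrative.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))          # features
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true                          # noiseless targets

w = np.zeros(3)
eta, B = 0.1, 32
for t in range(500):
    idx = rng.integers(0, len(X), size=B)        # draw a random minibatch
    err = X[idx] @ w - y[idx]
    grad = X[idx].T @ err / B                    # noisy estimate of grad J(w)
    w -= eta * grad                              # w^(t+1) = w^(t) - eta * grad
```

Because each step sees only B observations, the gradient is noisy but cheap; with noiseless targets the iterates still converge to the true weights.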

Gaussian Process Regression

A Gaussian Process is a probability distribution over functions

\[\begin{equation} \begin{gathered} f(x) \sim \mathcal{GP}(m(x), k(x, x')) \\ \text{where} \quad m(x) = \mathbb{E}[f(x)] \\ \text{and} \quad k(x, x') = \mathbb{E}[(f(x) - m(x))(f(x') - m(x'))] \end{gathered} \end{equation}\]

Gaussian Process Regression finds the function that best fits a set of data points

\[\begin{equation} \mathbb{P}(\mathbf{f} | \mathbf{X}) = \mathcal{N}(\mathbf{f} | \mathbf{m}, \mathbf{K}) \end{equation}\]

I use the standard squared-exponential covariance function; exploring alternatives is an active area of research

\[\begin{equation} k(\mathbf{x}_i, \mathbf{x}_j) = \sigma^2_f \exp\left(-\frac{1}{2l^2} (\mathbf{x}_i - \mathbf{x}_j)' (\mathbf{x}_i - \mathbf{x}_j)\right). \end{equation}\]
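The posterior mean and pointwise uncertainty can be computed from this kernel in a few lines. The hyperparameters (\(\sigma_f\), \(l\)), sample locations, and noise jitter below are illustrative assumptions; the target is the \(f(x) = x\cos(1.5x)\) example used later.

```python
import numpy as np

# GP regression from scratch with the squared-exponential kernel above.
# Hyperparameters, sample points, and jitter are illustrative assumptions.
def kernel(A, B, sf=1.0, ell=0.5):
    d2 = (A[:, None] - B[None, :]) ** 2
    return sf ** 2 * np.exp(-0.5 * d2 / ell ** 2)

X = np.array([0.1, 0.6, 1.1, 1.7, 2.3, 2.9, 3.4, 4.0, 4.5, 4.9])  # samples
y = X * np.cos(1.5 * X)
Xs = np.linspace(0.0, 5.0, 100)                 # prediction grid

K = kernel(X, X) + 1e-8 * np.eye(len(X))        # jitter for numerical stability
alpha = np.linalg.solve(K, y)
mean = kernel(Xs, X) @ alpha                    # posterior mean function
cov = kernel(Xs, Xs) - kernel(Xs, X) @ np.linalg.solve(K, kernel(X, Xs))
std = np.sqrt(np.clip(np.diag(cov), 0.0, None)) # pointwise uncertainty
```

The mean interpolates the data (up to the jitter), and the standard deviation grows between and beyond the sample points.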

Approximation

Gaussian Processes

  • are mathematically equivalent to neural networks of infinite width
  • do not require as much data as neural networks
  • offer uncertainty quantification of the mean function
  • can approximate any continuous function arbitrarily closely

Universal Approximation Theorem

A single hidden-layer ANN can approximate any continuous function arbitrarily closely as the number of neurons in the hidden layer increases.

An example

Consider the true function \(f(x) = x \cos(1.5x)\) sampled at random points


True Function

An example

A random sample of the GP posterior distribution of functions


Posterior Sample

An example

Gaussian Process Regression finds the function that best fits the data

  • Gaussian Process Regression gives us
    • Mean function of the posterior distribution
    • Uncertainty quantification of the mean function
    • Can be useful to predict where we might need more points and update the grid
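The last point can be sketched directly: with a fixed kernel, the GP posterior variance depends only on where we have sampled, so we can add the candidate point where the posterior standard deviation is largest. The kernel length scale and the 1-D grids are illustrative assumptions.

```python
import numpy as np

# Grid-refinement sketch: pick the candidate location with the largest GP
# posterior variance as the next gridpoint. Note the posterior variance of a
# fixed-kernel GP depends only on the sample locations, not observed values.
def sq_exp(A, B, ell=0.5):
    return np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / ell ** 2)

X = np.array([0.2, 1.0, 2.5, 4.8])        # current, sparse grid
cand = np.linspace(0.0, 5.0, 200)         # candidate locations

K = sq_exp(X, X) + 1e-8 * np.eye(len(X))
Ks = sq_exp(cand, X)
# diagonal of Kss - Ks K^{-1} Ks': posterior variance at each candidate
var = 1.0 - np.einsum("ij,ij->i", Ks, np.linalg.solve(K, Ks.T).T)
x_new = cand[np.argmax(var)]              # most uncertain point joins the grid
```

Here the widest gap in the grid lies between 2.5 and 4.8, so the selected point falls inside it.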

Back to the model

Second Stage Pension Endogenous Grid


Some Results

Consumption Function


Pension Consumption Function

Deposit Function


Pension Deposit Function

Conclusion

Conditions for using Sequential EGM

  • Model must be
    • concave
    • differentiable
    • continuous
    • separable

Need an additional function to exploit invertibility

Examples in this paper:

  • Separable utility function
    • \(\mathrm{u}(c, z) = \mathrm{u}(c) + h(z)\)
  • Continuous and differentiable transition
    • \(b_{t} = n_{t} + d_{t} + g(d_{t})\)

Resources

Thank you!

Powered by: Econ-ARK

engine: github.com/econ-ark/HARK

code: github.com/alanlujan91/SequentialEGM

website: alanlujan91.github.io/SequentialEGM/egmn
